
Bag-of-words model

The bag-of-words model is a simplifying representation used in natural language processing and information retrieval (IR). In this model, a text (such as a sentence or a document) is represented as the bag (multiset) of its words, disregarding grammar and even word order but keeping multiplicity. Recently, the bag-of-words model has also been used for computer vision.
The bag-of-words model is commonly used in methods of document classification, where the (frequency of) occurrence of each word is used as a feature for training a classifier.
An early reference to "bag of words" in a linguistic context can be found in Zellig Harris's 1954 article "Distributional Structure".
== Example implementation ==

The following example models text documents using bag-of-words.
Here are two simple text documents:

(1) John likes to watch movies. Mary likes movies too.


(2) John also likes to watch football games.

Based on these two text documents, a list is constructed as:

( "John",
"likes",
"to",
"watch",
"movies",
"also",
"football",
"games",
"Mary",
"too"
)

which contains 10 distinct words. Using the indexes of the list, each document is represented by a 10-entry vector:

(1) (1, 2, 1, 1, 2, 0, 0, 0, 1, 1)
(2) (1, 1, 1, 1, 0, 1, 1, 1, 0, 0)

where each entry of the vector gives the count of the corresponding word in the list (this is also the histogram representation). For example, in the first vector (which represents document 1), the first two entries are "1, 2". The first entry corresponds to the word "John", which is the first word in the list, and its value is "1" because "John" appears in the first document 1 time. Similarly, the second entry corresponds to the word "likes", which is the second word in the list, and its value is "2" because "likes" appears in the first document 2 times. This vector representation does not preserve the order of the words in the original sentences. This kind of representation has several successful applications, for example email filtering.
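The construction above can be sketched in Python using only the standard library. This is a minimal illustration, not a production tokenizer: words are split on whitespace and trailing punctuation is stripped, and the vocabulary is built in first-seen order, so it may list the words in a slightly different order than the article's example list while producing the same counts.

```python
from collections import Counter

def bag_of_words(documents):
    """Build a shared vocabulary and a count vector per document.

    Tokenization is deliberately naive: split on whitespace and strip
    trailing punctuation. Real systems use proper tokenizers.
    """
    tokenized = [
        [word.strip(".,!?") for word in doc.split()]
        for doc in documents
    ]
    # Vocabulary in first-seen order across all documents.
    vocab = []
    for tokens in tokenized:
        for tok in tokens:
            if tok not in vocab:
                vocab.append(tok)
    # One histogram (count vector) per document, indexed by the vocabulary.
    vectors = [
        [Counter(tokens)[word] for word in vocab]
        for tokens in tokenized
    ]
    return vocab, vectors

docs = [
    "John likes to watch movies. Mary likes movies too.",
    "John also likes to watch football games.",
]
vocab, vectors = bag_of_words(docs)
print(vocab)       # 10 distinct words
print(vectors[0])  # counts for document 1
print(vectors[1])  # counts for document 2
```

Note that the vectors discard word order entirely: any permutation of a document's words yields the same vector, which is exactly the "bag" (multiset) property described above.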

Excerpt source: the free encyclopedia Wikipedia.
Read the full "Bag-of-words model" article at Wikipedia.




Copyright(C) kotoba.ne.jp 1997-2016. All Rights Reserved.